Long-document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improve a retriever by making it mimic a heterogeneous yet powerful cross-encoder. However, in contrast to passages or sentences, retrieval on long documents suffers from the scope hypothesis: a long document may cover multiple topics. This maximizes structural heterogeneity and poses a granularity-mismatch issue, leading to inferior distillation efficacy. In this work, we propose a new learning framework, fine-grained distillation (FGD), for long-document retrievers. While preserving the conventional dense retrieval paradigm, it first produces globally consistent representations across different levels of granularity and then applies multi-granular aligned distillation, used only during training. In experiments, we evaluate our framework on two long-document retrieval benchmarks, on which it achieves state-of-the-art performance.
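The multi-granular alignment itself is specific to the paper, but the distillation objective it builds on — a dense-retriever student matching a cross-encoder teacher's score distribution over a query's candidates — can be sketched in plain Python (function names and scores are illustrative, not the paper's implementation):

```python
import math

def softmax(scores, temperature=1.0):
    """Turn raw relevance scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl_loss(teacher_scores, student_scores, temperature=1.0):
    """KL divergence from the teacher's distribution to the student's,
    computed over one query's candidate documents."""
    p = softmax(teacher_scores, temperature)  # cross-encoder (teacher)
    q = softmax(student_scores, temperature)  # dense retriever (student)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]  # cross-encoder scores for three candidates
student = [2.0, 1.5, 1.0]  # dot-product scores from the dense retriever
loss = distill_kl_loss(teacher, student)  # > 0 until distributions match
```

A temperature above 1 softens both distributions, a common knob in distillation setups.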
With the success of the prompt-tuning paradigm in Natural Language Processing (NLP), various prompt templates have been proposed to stimulate specific knowledge for downstream tasks, e.g., machine translation, text generation, and relation extraction. Existing prompt templates are mainly shared among all training samples and carry only the task description. However, training samples are quite diverse. A shared task description cannot stimulate the unique task-related information in each training sample, especially for tasks with finite label spaces. To exploit this unique task-related information, we imitate the human decision process, which seeks the contrastive attributes between the objective facts and their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual \textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class classification, e.g., relation classification, topic classification, and entity typing. Compared with simple classification tasks, these tasks have more complex finite label spaces and are more demanding for prompts. First, we prune the finite label space to construct fact-counterfactual pairs. Then, we extract the contrastive attributes by projecting training instances onto every fact-counterfactual pair. We further set up global prototypes corresponding to all contrastive attributes, and select valid contrastive attributes as additional tokens in the prompt template. Finally, simple Siamese representation learning is employed to enhance the robustness of the model. We conduct experiments on relation classification, topic classification, and entity typing tasks in both the fully supervised and the few-shot settings. The results indicate that our model outperforms prior baselines.
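As a rough illustration of the pruning step, constructing fact-counterfactual pairs might look like the following sketch, assuming per-instance label confusion scores are available (names and scores are hypothetical, not the paper's implementation):

```python
def build_fact_counterfactual_pairs(label_scores, gold_label, top_k=3):
    """Prune the label space to the top-k most confusable labels, then
    pair the gold label (the fact) with each remaining candidate
    (a potential counterfactual)."""
    candidates = sorted(label_scores, key=label_scores.get, reverse=True)[:top_k]
    return [(gold_label, c) for c in candidates if c != gold_label]

# Hypothetical confusion scores for one relation-classification instance.
scores = {"born_in": 0.60, "lives_in": 0.30, "works_for": 0.08, "capital_of": 0.02}
pairs = build_fact_counterfactual_pairs(scores, gold_label="born_in")
# → [("born_in", "lives_in"), ("born_in", "works_for")]
```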
Binary neural networks (BNNs) show great promise for real-world embedded devices. As one of the critical steps to obtain a powerful BNN, scale-factor calculation plays an essential role in reducing the performance gap to their real-valued counterparts. However, existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors, resulting in sub-optimal models caused by an insufficient training process. To address this issue, Recurrent Bilinear Optimization for BNNs (RBONN) is proposed to improve the learning process by associating the intrinsic bilinear variables in the backpropagation process. Our work is the first attempt to optimize BNNs from the bilinear perspective. Specifically, we employ a recurrent optimization and Density-ReLU to sequentially backtrack the sparse real-valued filters, which can be sufficiently trained and reach their performance limit under a controllable learning process. The resulting RBONN is robust and shows impressive performance over state-of-the-art BNNs on various models and datasets. In particular, RBONN generalizes exceptionally well on the object detection task. Our code is open-sourced at https://github.com/stevetsui/rbonn.
The StyleGAN family is among the most popular generative adversarial networks (GANs) for unconditional generation. Despite its impressive performance, its heavy demand on storage and computation still hinders deployment on resource-constrained devices. This paper provides a comprehensive study of distilling the popular StyleGAN architecture. Our key insight is that the main challenge of StyleGAN distillation lies in the output discrepancy issue, where the teacher and student models produce different outputs given the same input latent code. Standard knowledge distillation losses typically fail under this heterogeneous distillation scenario. We conduct a thorough analysis of the causes and effects of this discrepancy issue, and identify that the mapping network plays a vital role in determining the semantic information of generated images. Based on this finding, we propose a novel initialization strategy for the student model that ensures maximal output consistency. To further enhance the semantic consistency between the teacher and student models, we propose a latent-based distillation loss that preserves the semantic relations in the latent space. Extensive experiments demonstrate the effectiveness of our approach in distilling StyleGAN2 and StyleGAN3, surpassing existing GAN distillation methods.
As a "new frontier in evolutionary computation research", evolutionary transfer optimization (ETO) overcomes the traditional paradigm of zero reuse of related experience and knowledge from problems solved in the past. In scheduling applications via ETO, a highly attractive and highly competitive framework can be formed where intelligent scheduling "meets" green scheduling, especially under China's "carbon neutrality" pledge. To the best of our knowledge, our paper on scheduling here is the first work on the class of ETO frameworks in which multi-objective optimization problems "meet" single-objective ones in the discrete case (rather than multitasking optimization). More specifically, key knowledge conveyed for industrial applications, such as positional building blocks clustered with genetic algorithms, can be learned and transferred for the permutation flow shop scheduling problem (PFSP) via a new core transfer mechanism and learning techniques. Extensive studies on well-studied benchmarks validate the firm effectiveness and broad generality of our proposed ETO-PFSP framework. Our investigations (1) enrich the ETO frameworks, (2) contribute to the classical and fundamental building-block theory of genetic algorithms and memetic algorithms, and (3) move towards a learning-based paradigm shift of evolutionary scheduling, namely the "knowledge and building-block based scheduling" (KAB2S) paradigm for "industrial intelligence" in China.
Rankers play an indispensable role in the de facto "retrieve-and-rerank" pipeline, yet their training still lags behind: they learn from moderate negatives and/or serve merely as an auxiliary module for a retriever. In this work, we first identify two major barriers to a robust ranker: the inherent label noise caused by a well-trained retriever, and the non-ideal negatives sampled for a high-capability ranker. We therefore propose using multiple retrievers as negative generators to improve the ranker's robustness, where (i) involving extensive out-of-distribution label noise makes the ranker robust against each noise distribution, and (ii) negatives from diverse retrievers lie closer to the ranker's own negative distribution, leading to more challenging training. To evaluate our robust ranker (dubbed R²anker), we conduct experiments on popular passage retrieval benchmarks in various settings, including BM25 reranking, full ranking, retriever distillation, etc. The empirical results verify the new state-of-the-art effectiveness of our model.
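The exact sampling scheme is the paper's contribution, but pooling hard negatives from several retrievers' rankings can be sketched as follows (a simplified illustration, not R²anker's actual training code):

```python
def mix_hard_negatives(retriever_rankings, positives, per_retriever=2):
    """Pool hard negatives from several retrievers' top-ranked lists,
    skipping known positives, so the ranker is exposed to multiple
    negative (and label-noise) distributions."""
    pooled = []
    for ranking in retriever_rankings:
        pooled.extend([d for d in ranking if d not in positives][:per_retriever])
    seen = set()  # de-duplicate while preserving order
    return [d for d in pooled if not (d in seen or seen.add(d))]

bm25_top = ["d1", "d2", "d3"]   # e.g. a lexical retriever's top results
dense_top = ["d2", "d4", "d5"]  # e.g. a dense retriever's top results
negatives = mix_hard_negatives([bm25_top, dense_top], positives={"d1"})
# → ["d2", "d3", "d4"]
```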
Detecting oriented objects and estimating their rotation information is one crucial step for analyzing remote sensing images. Although many methods have been proposed recently, most of them directly learn to predict object orientation under the supervision of only one kind of ground-truth value (e.g., the rotation angle) or only a few (e.g., several coordinates). Adopting additional constraints on the regression of proposals and rotation information during training would enable more accurate object detection. To this end, we innovatively propose a mechanism that simultaneously learns horizontal proposals, oriented proposals, and rotation angles of objects in a consistent manner via naive geometric computation, serving as an additional steady constraint (see Figure 1). An oriented-center-prior-guided label assignment strategy is proposed to further enhance the quality of proposals, yielding better performance. Extensive experiments demonstrate that the model equipped with our idea significantly outperforms the baseline by a large margin, achieving new state-of-the-art results without any extra computational burden during inference. Our proposed idea is simple and intuitive and can be readily implemented. The source code and trained models are included in the supplementary files.
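The kind of "naive geometric computation" that ties the three predictions together can be illustrated with the standard conversion from an oriented box to its enclosing horizontal box (a generic sketch, not necessarily the paper's exact formulation):

```python
import math

def oriented_to_horizontal(cx, cy, w, h, theta):
    """Enclosing axis-aligned box of an oriented box, found by rotating
    its four corners; such a tie lets horizontal-proposal,
    oriented-proposal and rotation-angle predictions constrain one
    another consistently."""
    corners = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    xs = [cx + x * cos_t - y * sin_t for x, y in corners]
    ys = [cy + x * sin_t + y * cos_t for x, y in corners]
    return min(xs), min(ys), max(xs), max(ys)

# With zero rotation the horizontal box equals the oriented box itself;
# at 90 degrees the width and height swap.
box_0 = oriented_to_horizontal(10, 10, 4, 2, 0.0)           # (8, 9, 12, 11)
box_90 = oriented_to_horizontal(10, 10, 4, 2, math.pi / 2)  # ~(9, 8, 11, 12)
```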
Real-time point cloud processing is fundamental for numerous computer vision tasks, yet it remains challenged by the computation problem on resource-constrained edge devices. To tackle this, we implement XNOR-Net-based binary neural networks (BNNs) for efficient point cloud processing, but their performance suffers severely from two main drawbacks: Gaussian-distributed weights and non-learnable scale factors. In this paper, we introduce point-wise operations based on Expectation-Maximization (POEM) into BNNs for efficient point cloud processing. The EM algorithm effectively constrains the weights into a robust bimodal distribution. A well-designed reconstruction loss is introduced to compute learnable scale factors, enhancing the representation capacity of 1-bit fully-connected (Bi-FC) layers. Extensive experiments demonstrate that our POEM surpasses existing state-of-the-art binary point cloud networks by a significant margin, up to 6.7%.
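For context on the scale-factor drawback, the XNOR-Net baseline that POEM improves upon uses a closed-form, non-learnable scale factor. A minimal sketch (illustrative only — POEM instead learns the scale by minimizing a reconstruction loss):

```python
def binarize_xnor(weights):
    """XNOR-Net-style binarization: sign(w) scaled by the closed-form
    factor alpha = mean(|w|), which minimizes ||w - alpha * sign(w)||^2."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    return [alpha * (1.0 if w >= 0 else -1.0) for w in weights]

def reconstruction_error(weights, alpha):
    """Squared error between real-valued weights and their 1-bit
    reconstruction under a given scale factor."""
    return sum((w - alpha * (1.0 if w >= 0 else -1.0)) ** 2 for w in weights)

w = [0.8, -0.3, 0.5, -0.9]
alpha_closed = sum(abs(x) for x in w) / len(w)  # mean(|w|) = 0.625
# This alpha is optimal only for the squared-error criterion above;
# POEM makes the scale learnable through its reconstruction loss instead.
```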
An enhanced geothermal system is essential to provide sustainable, long-term geothermal energy supplies and reduce carbon emissions. An optimal well-control scheme for effective heat extraction and improved heat-sweep efficiency plays a significant role in geothermal development. However, the performance of most existing optimization algorithms deteriorates as the dimension increases. To solve this issue, a novel surrogate-assisted level-based learning evolutionary search algorithm (SLLES) is proposed for heat-extraction optimization of enhanced geothermal systems. SLLES consists of a classifier-assisted level-based learning pre-screening part and a local evolutionary search part. The cooperation of the two parts balances exploration and exploitation during the optimization process. After iterative sampling from the design space, the robustness and effectiveness of the algorithm are shown to improve significantly. To the best of our knowledge, the proposed algorithm constitutes a state-of-the-art simulation-based optimization framework. Comparative experiments have been conducted on benchmark functions, a two-dimensional fractured reservoir, and a three-dimensional enhanced geothermal system. The proposed algorithm outperforms five other state-of-the-art surrogate-assisted algorithms on all selected benchmark functions. The results on the two heat-extraction cases also demonstrate that SLLES achieves superior optimization performance compared with a traditional evolutionary algorithm and other surrogate-assisted algorithms. This work lays a solid basis for efficient geothermal extraction with enhanced geothermal systems and sheds light on model-management strategies for data-driven optimization in the area of energy exploitation.
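The classifier-assisted pre-screening follows the generic surrogate-assisted loop: evaluate a small archive with the expensive simulator, rank cheap candidates with a surrogate, and spend real evaluations only on the most promising ones. A toy sketch with a nearest-neighbor surrogate standing in for the paper's classifier (all names and functions are illustrative):

```python
import random

def expensive_objective(x):
    """Stand-in for a costly reservoir simulation (sphere function)."""
    return sum(xi * xi for xi in x)

def surrogate_predict(x, archive):
    """1-nearest-neighbor surrogate: reuse the objective value of the
    closest already-simulated point."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(archive, key=lambda rec: dist(rec[0], x))[1]

random.seed(0)
dim = 5
archive = []  # (design point, true objective) pairs from real simulations
for _ in range(20):
    point = [random.uniform(-5, 5) for _ in range(dim)]
    archive.append((point, expensive_objective(point)))

# Pre-screen many cheap candidates; spend one real evaluation on the best.
candidates = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(100)]
best = min(candidates, key=lambda c: surrogate_predict(c, archive))
archive.append((best, expensive_objective(best)))
```

The design point here is that 100 candidates cost only surrogate calls, while the simulator runs once per iteration.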
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform worse than Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to train well from scratch due to a lack of inductive bias, and they tend to focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but introduces excessive computation. In contrast, we present two attentive pooling (AP) modules that pool noisy features directly: Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They guide the model to emphasize the most discriminative features while reducing the impact of less relevant ones. APP is employed to select the most informative patches on CNN features, and ATP discards unimportant tokens in ViT. Being simple to implement and free of learnable parameters, APP and ATP reduce the computational cost while boosting performance by pursuing only the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive pooling modules, and quantitative results on six in-the-wild datasets outperform other state-of-the-art methods.
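The core operation behind ATP — parameter-free pooling that keeps only the highest-scoring tokens — can be sketched as follows (a simplified illustration; in the model, the attention scores would come from the transformer itself):

```python
def attentive_token_pooling(tokens, scores, keep_ratio=0.5):
    """Parameter-free pooling in the spirit of ATP: rank tokens by their
    attention scores and drop the least informative ones. `tokens` are
    feature vectors, `scores` their attention weights."""
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = sorted(ranked[:k])  # preserve the original token order
    return [tokens[i] for i in kept]

tokens = [[1.0], [2.0], [3.0], [4.0]]
scores = [0.1, 0.9, 0.05, 0.7]
pooled = attentive_token_pooling(tokens, scores, keep_ratio=0.5)
# keeps the two tokens with the highest scores: [[2.0], [4.0]]
```

Because no parameters are learned, the pooling also shrinks the sequence length for all subsequent layers, which is where the computational saving comes from.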